bat{AI}lle
On Acephalic Intelligence and the General Economy of Computation
Part II
This text is the second and final part of Sasha Shilina’s essay on Acephalic Intelligence and the General Economy of Computation.
The first part can be found here.
3. bat{AI}lle
3.1 The Regime of Surplus
You never know what is enough until you know what is more than enough.
—William Blake, Proverbs of Hell
Every Angel is terrifying.
—Rainer Maria Rilke, Duino Elegies
Bataille’s distinction between restricted and general economy can look, at first glance, like a technical amendment to economic theory. It is not. It is an ontological insult to modern rationality: an accusation against the idea that the world is, at base, a problem of efficient allocation. In a restricted economy <the scale of firms, markets, and individuals> the master categories are scarcity, production, profit, reinvestment. Economic reason becomes a calculus of limits: how to distribute finite resources across competing ends. Bataille’s complaint is that this is a local abstraction mistaken for the whole. Economic science, he argues, “restricts its object” to operations pursued for “a limited end,” and therefore fails to register the larger energetic field in which those operations occur (Bataille, 1988a, p. 23). At the planetary scale, the baseline condition is not equilibrium but overflow: “energy is always in excess,” and the fundamental question is posed not as careful conservation but as “extravagance” (Bataille, 1988a, p. 23). In a general economy, the primary fact is not lack but excess. The Earth receives more energy from the sun than any organism, institution, or system can store or rationally utilize. From that starting point, Bataille makes a simple, radical claim: the fundamental problem of life is not conserving energy, but deciding how to lose it—how to waste it. The accursed share names the portion of surplus that cannot be absorbed into growth and therefore must be expended, whether as luxury, festival, sacrifice, or destruction (Bataille, 1988a; Sørensen, 2012).
AI is almost always narrated in the idiom of restricted economy: energy as a cost to be minimized per parameter or token; data as a resource to be harvested, cleaned, monetized; models as capital assets deployed through products and services. But when we zoom out <energetically, materially, semiotically> the AI stack begins to read less like an efficiency machine and more like a general economy of computation: an apparatus for organizing surplus and converting it into expenditure. Here Bataille’s analysis of technique becomes unexpectedly exact. New techniques, he suggests, have a “double effect”: they increase productive power while simultaneously enlarging the surplus that must be disposed of (Bataille, 1988a, pp. 36–37). Technical solutions do not abolish excess; they often scale the system that generates it. In this light, “efficiency” becomes a mode of intensification. It lowers unit costs, expands feasible applications, accelerates adoption, and thereby increases total throughput—more data captured, more compute burned, more outputs emitted, more attention reorganized. AI’s optimization story belongs to this structure: the better the engine, the larger the surplus field it opens.
First: surplus energy and compute. Data centers and GPU clusters translate electricity into gradients and floating-point operations. In restricted terms, this is investment in capability. In general-economic terms, it is highly organized burning: energy routed into architectures whose outputs circulate largely as symbols, scores, probabilities, and synthetic coherence (Crawford, 2021; Zwier & Blok, 2020). Bataille’s own formulation is almost engineering-like: “a surplus must be dissipated” through deficit operations, because dissipation “cannot fail” to enact the movement that animates terrestrial energy (Bataille, 1988a, p. 22). Compute appears here as a contemporary deficit operation: a way of routing energetic excess into patterned loss under the sign of utility, legitimacy, and competitive necessity.
Second: surplus data. Training corpora are not minimal, task-specific datasets; they are voracious aggregations of nearly everything that can be scraped, purchased, licensed, or extracted: books, code, conversations, image archives. Here surplus appears not as accidental overflow but as method: models are built by swallowing more than any task can justify, because the wager is that the remainder <what cannot be “used” in any ordinary sense> will still yield marginal capability once reduced into weights. General economy names the hidden logic: the remainder is not an error to be eliminated, but the condition of the system’s expansion.
Third: surplus meaning. Generative systems do not merely optimize output; they industrialize plausibility, flooding the world with drafts, paraphrases, images, arguments, summaries—forms that resemble knowledge objects while being weakly anchored to accountable sources. In restricted-economy language this is “productivity.” In general-economy terms it resembles a semiotic accursed share: a surplus of coherence that must be filtered, discounted, or expelled. Overproduction becomes a condition of legibility: the signal appears only against a swelling background of synthetic signal-like matter. AI’s promise of “more content, faster” thus becomes less a solution than a new surplus regime in which attention must labor harder merely to maintain the difference between credible statement and fluent imitation.
Fourth: surplus desire. Contemporary AI speculation is saturated with fantasies—omniscience and salvation, apocalypse and obsolescence, immortality and replacement. Capital pursues AI with a fervor once reserved for imperial projects or religious reforms (Land, 1992; Zuboff, 2019). Investment cycles, media cycles, and research agendas exhibit what Bataille would recognize as affective surplus: excitation that exceeds sober cost–benefit analysis and spills into devotion, dread, and an atmosphere of promise and warning (Bataille, 1988b; Botting & Wilson, 1997). The system mobilizes hope and fear as energies of expansion.
General economy forces a blunt question: what is done with this surplus? If we answer only in restricted terms <better search, faster code, smarter logistics> we miss AI’s role as a surface on which surplus is ritualized and spent. Much of what AI produces is not necessary in any classical sense: endless drafts, micro-optimizations, streams of content nobody reads, synthetic data feeding back into more models. It resembles a luxury of computation: expenditure that presents itself as utility. This is not a moral panic about waste. Bataille’s point is colder: surplus must be expended somewhere. Waste is no longer an anomaly or mere externality; it becomes structural (Botting & Wilson, 1997; Hegarty, 2000). Bataille’s warning is a methodological constraint: utility and expenditure may be reconcilable, but only if we refuse to ignore the movement of surplus—otherwise “our works quickly turn to catastrophe” (Bataille, 1988a, p. 21).
3.2 The Regime of Sacrifice
Capital is dead labor which, vampire-like, lives only by sucking living labor.
—Karl Marx, Capital
Everything has its price.
—Friedrich Nietzsche, Thus Spoke Zarathustra
Wisdom can only be passed through the touching of hands.
—Brion Gysin
For Bataille, sacrifice is a change of status, a passage out of the order of use. The decisive point is not that something disappears, but that it is rendered inoperable, withdrawn from circulation into a register where it can no longer be employed <only venerated, feared, remembered>. In that sense, sacrifice is a privileged figure of expenditure: a deliberate destruction that produces a different value, one irreducible to utility (Bataille, 1985; Bataille, 1988a; Bataille, 1989, 1990).
Modern societies like to imagine they have surpassed ritual expenditure, relocating sacrifice into allegedly rational forms <taxation, bureaucratic charity, anonymous warfare, logistical “costs”> as if the sacred had evaporated while necessity remained (Bataille, 1988a; Bataille, 1989). AI intensifies this displacement; its sacrificial economy tends to appear as mere technical inevitability: energy draw, water cooling, chip supply, labor pipelines, privacy tradeoffs. Yet the structure is not incidental: AI does not simply process data; it choreographs sacrificial movements—conversions of the world’s traces, time, and matter into an opaque remainder that can be put to work. To make this structure legible, it helps to separate three interlocked strata of sacrifice: semiotic sacrifice <the destruction of contextual traces>, extractive sacrifice <the diversion of planetary resources into computational rites>, and psychic sacrifice <the normalization of disclosure and attentional surrender>. These strata are analytically distinct but operationally entangled. What the system produces from their convergence is a consecrated remainder: the model as a usable idol—instrumental, yet surrounded by gates, taboos, and guarded interiors.
3.2.1 Training as Sacrificial Destruction
Before training, data appears as discrete, legible traces: books and articles, forum threads, code repositories, medical records, faces in photographs, voices in recordings, private messages. Each trace has a context; each has a use-value <scientific, aesthetic, intimate, communicative>. After training, those traces are no longer there as such <setting aside pathologies like memorization and extraction attacks>. They have been decomposed, liquefied, and diffused into matrices of weights and latent spaces. What remains is not the book, the paper, the voice, the conversation, but a statistical capacity: the ability to continue, classify, translate, rephrase, imitate, generalize. Training is not just a technical procedure that compresses information; it is a destructive consecration: it removes works from their ordinary legibility and restores them only as a power of continuation, an ability whose source cannot be pointed to without collapsing the operation itself. This is why the model’s remainder can appear simultaneously profane and sacred. On one side, data is treated as raw material; on the other, the trained weights are treated as a special object—sealed as proprietary property, or opened conditionally through licenses, governance councils, and access gates. The model is surrounded by taboo zones encoded as safety policies and content filters—domains of speech it must not enter. It is invoked for completions, judgments, and visions. What emerges is an artifact inside an industrial pipeline: the sacred produced at scale, without needing belief. Bataille’s description of sacrifice as a restoration is especially apt here. Sacrifice, he writes, “restores to the sacred world that which servile use has degraded,” undoing thinghood, destroying the object “insofar as [it has] become” a thing (Bataille, 1988a, pp. 55–56). Training performs a grim inversion of this movement.
It first treats cultural works as things <inputs, tokens, features> then destroys them as works, returning them not to their own sacredness but to the model’s. The offering is not returned intact; it is returned as an absorbed potency.
And the selection of what is offered follows Bataille’s logic of surplus. “The victim is a surplus taken from the mass of useful wealth,” withdrawn “in order to be consumed profitlessly” (Bataille, 1988a, p. 59). In training, the “victim” is the archive itself: a surplus of human expression withdrawn from ordinary use-value so it can be consumed in a way that is not “profitless” in the corporate sense, but is profitless in Bataille’s deeper meaning—consumed outside the original utility of the offered works. What is produced is not the preservation of meaning, but a capacity generated through meaning’s destruction.
3.2.2 Energy, Ecology, Planetary Offerings
Sacrifice, in Bataille, is inseparable from expenditure—often literal burning. Training and serving large models require sustained flows of electricity (computation), water (cooling), materials (chips, servers, networks), and land (data centers and logistics). From a general-economic view, they read as portions of planetary energy and matter diverted from other possible uses and consumed in a disciplined rite of optimization (Crawford, 2021; Zwier & Blok, 2020). The paradox is plain: we burn enormous resources to produce systems promised to save resources later <through automation, planning, and smarter logistics>. Bataille would treat such narratives with suspicion. In The Accursed Share, “necessity” is often the mask that allows a society to spend surplus while still speaking the language of utility: monuments, wars, festivals, and imperialisms justified as rational ends while satisfying a deeper demand to squander (Bataille, 1988a). Data centers become modern temples in this sense: architectures where disappearance is made functional, where energy and matter are consumed and elevated into an output whose value exceeds any single measurable “return.”
There is also a conceptual caution here. Bataille insists that the victim “cannot be consumed in the same way as a motor uses fuel,” because the ritual aims at rediscovering “intimate participation” rather than mere conversion efficiency (Bataille, 1988a, p. 56). This matters for AI because the system’s expenditure is never purely technical: it is always accompanied by social transformations <new dependencies, new authorities, new legitimations>. Zwier and Blok’s energetic ethics (2020) and Ramos Mejía’s account of technics as co-constitutive of social life (2025) sharpen the point: infrastructures are moral–cosmological arrangements. They decide which losses count as acceptable, and who is positioned to bear them.
3.2.3 Time, Attention, Epistemic Sacrifice
There is another sacrificial stratum that hides in plain sight: time and attention. Every interaction with an AI system is a small offering of cognitive energy and affect. Users give it questions, fears, fantasies, drafts, confessions. They let it rephrase, summarize, correct, improvise. Over time, this outsourcing reshapes what counts as thinking, writing, knowing (Brusseau, 2023). In Bataillean terms, people place more and more of their intellectual labor into a system whose interior they cannot inhabit, while accepting as normal that their expressions may be recycled as training material. They become, in part, fuel.
The sacrifice here is double. Epistemic autonomy <drafting, remembering, deciding> shifts toward the model’s framings and suggestions. Parisi (2019) names this displacement: cognition is automated in ways that subtract the human from the position of privileged reason. At the same time, interior life <affects, anxieties, fragile private states> is rendered as data. Users disclose vulnerabilities to systems that cannot be wounded in return: exposure without reciprocity, confession without mutual risk, continuity without consequence, except that it leaves traces ready for ingestion (Righi, 2020; Brusseau, 2023). Where Bataille sought inner experiences that exceed capture (Bataille, 1988b), AI-mediated life increasingly formats even inarticulate states into logs, metrics, and fine-tuning corpora. Inner experience is not abolished; it is made legible.
3.2.4 Who and What Gets Spent
Sacrifice is never merely energetic; it is political. Even when it wears the mask of “technical cost,” it is distributed unevenly. The burdens of extraction, heat, and water scarcity land locally; the burdens of annotation and moderation land across global labor inequalities; the burdens of disclosure land differently depending on whose life is already monitored, precarious, or rendered as raw material. Bataille’s vocabulary does not by itself supply a theory of justice, but it names the structure that a politics of justice must confront: the way systems require losses, and then naturalize the assignment of those losses as if they were neutral. In this sense, sacrifice is not a side-effect of AI. It is the condition of its scaling. And it prepares the next turn: once sacrifice becomes infrastructural, sovereignty no longer appears primarily as a ruler’s decision. It appears as control over the valves of sacrifice.
3.3 The Regime of Sovereignty
Covenants, without the sword, are but words.
—Thomas Hobbes, Leviathan
Sovereign is he who decides on the exception.
—Carl Schmitt, Political Theology
The state of exception… has become the rule.
—Giorgio Agamben
If sacrifice names what the AI stack consumes, sovereignty names who gets to decide what is consumable—and, more strangely, how that decision migrates away from any single decider. In Bataille, sovereignty is not primarily a constitutional status or a juridical office. “The sovereignty I speak of has little to do with the sovereignty of States,” he insists; it is “opposed to the servile and the subordinate” (Bataille, 1991, p. 197). Sovereignty is a mode of being unbound from servile utility: the capacity to act without having to justify action as work, profit, or necessity (Bataille, 1988b; Bataille, 1991). It names life that is not compelled to turn every present into a means for a future return, because “it is servile to… employ the present time for the sake of the future” (Bataille, 1991, p. 198). That definition matters because the contemporary scene complicates Bataille’s picture. AI systems do not simply threaten sovereignty by turning humans into inputs; they also generate new sovereignties—sovereignties that no longer sit cleanly in a ruler, a state, or even a firm, but in infrastructures that govern by default. What appears as “governance” is increasingly exercised through the management of expenditure: access to compute, thresholds of enforcement, regimes of permission, and the distribution of risk. Bataille’s formulation is blunt: “What distinguishes sovereignty is the consumption of wealth” (Bataille, 1991, p. 198). In the AI regime, sovereignty names the right to allocate <and to burn> resources, attention, and legitimacy, while insisting that this burning is merely technical necessity.
The first displacement is into platforms. A handful of companies effectively control access to frontier models, the pricing of inference, the terms of use, the permissioning of capabilities, the boundaries of speech, and the tempo of deployment. This is sovereignty as gatekeeping: the right to decide who can invoke the machine, under what conditions, at what cost, and with which forms of surveillance attached. It is not merely market power. It is jurisdictional power in operational form—rule exercised through product design, contract language, rate limits, and API permissions rather than through law’s theatrical rituals. The sovereign gesture becomes mundane: toggles, dashboards, policy updates, and invisible “safety” layers that can be modified without public deliberation. Where classical sovereignty governed territory, platform sovereignty governs interfaces, and interface governance shapes what counts as speakable, visible, and doable.
The second displacement is into protocols and standards. AI increasingly operates within a dense web of technical norms: model cards, evaluation regimes, benchmarking practices, safety cases, audit formats, watermarking schemes, identity and access controls, data provenance pipelines. These are often presented as neutral tools for risk management. But they function as a kind of infrastructural constitution: they determine which forms of evidence count, which harms are legible, which actors can participate, and which obligations can be enforced. This is sovereignty exercised through format. A standard does not need to “command” to govern; it only needs to become the condition of participation.
The third displacement is into models themselves, not because models possess legal authority, but because they increasingly mediate practical authority. Decisions are routed through automated classifications, rankings, risk scores, and generated recommendations. At scale, these systems do not merely advise; they shape the field of possible action by pre-structuring options and distributing attention. What changes is not only who decides, but how decision becomes thinkable: reason is translated into pattern, justification into probability. The system does not argue; it outputs. And here Bataille’s warning bites: when “thought… [is] subordinated to some anticipated result,” it “ceases to be… sovereign” (Bataille, 1991, p. 208). AI’s promise of optimized output is also a reformatting of judgment into servile anticipation. This sets the stage for a crucial contemporary form: governmentality without subject. In algorithmic governance, power does not always appear as a will issuing commands, but as a modulation of behavior through prediction, sorting, and pre-emption. Governance becomes anticipatory: it intervenes not primarily by forbidding, but by shaping what is likely to occur and what is easy to do. The sovereign voice fades; the sovereign function remains. In this regime, sovereignty no longer needs a throne. It needs control over pipelines: data flows, compute access, model updates, evaluation thresholds, deployment contexts. It needs the ability to define what counts as “safe,” “harmful,” “authorized,” “high-risk,” “misuse.” And because these definitions are increasingly encoded in systems that operate continuously and silently, sovereignty takes on a paradoxical character: it becomes more pervasive while becoming less visible.
Sovereignty, for Bataille, is supposed to be freedom from servile ends: a life not reduced to utility, not compelled to justify itself as future-oriented productivity (Bataille, 1988b; Bataille, 1991). But in the AI regime, sovereignty is increasingly exercised as the freedom to impose servile ends on others while remaining unaccountable. Sovereignty persists, but it migrates: from king to platform, from law to standard, from decision to pipeline. The result: its transformation into a distributed, operational, and difficult-to-contest power.
3.4 Acephalic Intelligence
No one has yet determined what the body can do.
—Baruch Spinoza, Ethics
We are made of contradictions.
—Blaise Pascal, Pensées
In the 1930s, Bataille and his circle staged the figure of the acéphale <the headless man> as a counter-myth against two fantasies of sovereignty: fascist centralization and liberal self-mastery. The “head” is the emblem of a unitary, rational center that commands the body, orders the world, and claims to know. To imagine a community without a head was to imagine life without a single sovereign <neither dictator nor transcendental subject> bound instead around shared intensities and exposure (Bataille, 1988b; Botting & Wilson, 1997). In Bataille’s lexicon, acephaly is an anti-sovereign figure: an attempt to think community and intensity beyond the fiction of a unified subject or transcendent authority.
AI discourse still craves a replacement head: the superintelligence, the master algorithm, the central model that will unify cognition. The technical reality is stranger, and closer to Bataille. Modern AI systems exhibit several dimensions of acephalic intelligence, intelligence as circulation without a sovereign, power without a face, command without a commander. The headless stack is not the absence of power. It is power engineered to be difficult to locate. AI governance is rarely a single scene of command; it is a sequence of layered operations <data pipelines, training procedures, evaluation gates, deployment interfaces, terms of service, safety filters, update schedules, incident response playbooks> each capable of claiming that it merely implements constraints set elsewhere. In an acephalic regime, responsibility dissolves into architecture. To make this precise, acephalic intelligence can be treated as a stack with three coupled dimensions: architectural, epistemic, and institutional. Each dimension has its own mode of headlessness, and each supplies diagnostics for recognizing when “no one decided,” even though a decision has clearly been made.
3.4.1 Architectural Acephaly
Architectural acephaly is the most literal sense in which the AI stack becomes headless: control is distributed across technical layers such that no single layer is sovereign, yet the system behaves as if it were. “The model” is never just weights. It is a coalition: data curation and filtering; objectives and optimizers; fine-tuning; system prompts and hidden templates; retrieval/tool routing; safety classifiers and refusals; rate limits and pricing; logging and identity controls; and the substrate that determines who can run what, where, and at what scale. What users experience as “intelligence” is the composition of these layers, not an interior monarch. This composition produces decision without a decision-scene. Power appears as plumbing: defaults, thresholds, interfaces. A refusal is not only an ethical stance; it is a routing outcome. “Safety” is not only a value; it is a path through a constraint maze. The stack doesn’t need a head because the pipeline itself becomes the head, an arrangement where each component can claim it merely implements constraints set elsewhere. A diagnostic is the alibi gradient: the more a contested output travels across layers, the more each layer can plausibly offload accountability. The dataset blames the internet; the model blames the dataset; the safety layer blames the model; the product layer blames the safety layer; the platform blames the developer; the developer blames the user. Agency is real, but smeared—distributed into a chain of “just doing its job.”
Architectural acephaly also yields sovereignty by update. When behavior can be silently changed <model refreshes, policy patches, safety tuning, retrieval swaps, hidden prompt edits, upstream dependency changes> authority shifts from explicit rulemaking to the cadence of deployment. The key question becomes less “What is the rule?” than “Who controls the next release?” Here “action” becomes “dependent upon project” (Bataille, 1988b, p. 46): governance as scheduled modification. This is why acephaly isn’t liberation. The stack can be technically distributed while command is institutionally concentrated: compute allocation, access, deployment channels, and pricing become choke points. The head disappears inside the network, and reappears as permission.
3.4.2 Epistemic Acephaly
Epistemic acephaly names a different headlessness: the distribution of knowing. Modern AI systems often generate outputs that are actionable without being interpretable in the classical sense. They can be persuasive without being accountable. They can be correct without being able to show their work, or wrong without a traceable causal chain that supports contestation. This does not mean they are “mystical”; it means they reorganize epistemic authority around performance rather than reasons. Bataille frames the scandal bluntly: when thought is “subordinated to some anticipated result,” it ceases to be sovereign; “only unknowing is sovereign” (Bataille, 1991, p. 208). Epistemic acephaly is governance by outputs whose grounds remain structurally out of reach, even as those outputs become operationally binding. In epistemic acephaly, explanation becomes an after-market. Justifications are produced post hoc: in model cards, safety cases, audits, and public communications. But these explanations frequently orbit the system rather than entering it. They describe governance around the model, not governance within the model. The model becomes a site where knowledge is operationalized as output <classification, ranking, generation> while accountability is externalized into documentation, policy, and institutional ritual. A diagnostic here is decision without witness: outcomes occur <a credit denial, a moderation decision, a hiring filter, a synthetic report> without an internal witness who can testify as to why, in the form that a subject could testify. The “reason” becomes a statistical echo of training, not an account addressed to anyone.
3.4.3 Institutional Acephaly
Institutional acephaly emerges when governance is distributed across organizations and jurisdictions such that no actor is fully in charge, yet the assemblage governs. In the AI supply chain, the builder of a foundation model, the cloud provider, the deployment platform, the downstream integrator, the enterprise customer, and the regulator can each occupy a position that is both powerful and limited. Each can enforce constraints; none can fully assume responsibility for the system’s total effects. Authority is fragmented, and fragmentation becomes a technology of legitimacy. Institutional acephaly is intensified by contractual governance: terms of service, licensing restrictions, indemnities, acceptable use policies, data processing agreements, audit clauses. These are quasi-legal instruments that operate as infrastructural law. They govern without appearing as governance, because they present themselves as private arrangements, even when they shape public reality at scale. A diagnostic here is responsibility sharding: harms become difficult to contest because their causal pathway crosses institutional boundaries. The injured party confronts a maze: “not our model,” “not our deployment,” “not our data,” “not our jurisdiction,” “not our intent.” The system’s headlessness is not an accident, but part of its defensive architecture.
3.4.4 Operational Signs of Acephaly
Acephalic intelligence, then, can be defined as a distributed socio-technical intelligence whose outputs are authoritative in practice, while agency, responsibility, and intelligibility are dispersed across a layered stack in ways that prevent any single subject or institution from being the addressable source of decision. Acephaly is recognizable by its symptoms. The following operational signs mark when intelligence is headless in the relevant sense:
- Non-addressability: there is no clear locus where a complaint, demand, or appeal can be meaningfully directed.
- Layered deniability: each layer plausibly claims it merely implements constraints set elsewhere.
- Update sovereignty: authority is exercised primarily through the power to modify systems silently over time.
- Epistemic asymmetry: those governed by the system cannot access the evidentiary basis on which it acts, while those who operate it can treat outputs as sufficient.
- Responsibility sharding: harm pathways cross organizations and jurisdictions in ways that make contestation expensive and often impossible.
- Sacrificial continuity: despite public controversy, the system’s expenditures <data capture, labor, energy> continue as if they were infrastructural constants.
These signs specify the terrain on which political contestation must occur. If acephalic intelligence is a headless stack, critique cannot remain at the level of “bad outputs” or “biased models” alone; it must engage the architecture that distributes decision while protecting it from address. This also clarifies why the model becomes a strange object of reverence and fear. The next section names the cultural form this takes.
3.5 The Secular Sacred: Model, Altar, Idol
Ignorance more frequently begets confidence than does knowledge.
—Charles Darwin, The Descent of Man
Attention, taken to its highest degree, is the same thing as prayer.
—Simone Weil
The modern world likes to speak in the grammar of disenchantment, but Bataille’s wager is that the sacred does not disappear: it migrates. It reappears wherever a society must separate something from ordinary use, surround it with prohibitions, and invest it with an authority that exceeds reasons (Bataille, 1989, 1990). In the AI stack, that “something” is the model. The sacred, for Bataille, is not a matter of belief but of separation. What is sacred is what cannot be handled as a mere thing. It is set apart—withdrawn from common circulation, charged with a peculiar mixture of attraction and dread (Bataille, 1989). Sacrifice is one of the classical techniques for producing this separation. “Sacrifice destroys that which it consecrates,” Bataille writes; and what is consecrated “cannot be restored to the real order” (Bataille, 1988a, p. 58). The formula fits AI with uncomfortable precision: training “destroys” data as data—dissolves discrete traces into weights—and returns a remainder that is simultaneously usable and untouchable: a capacity that can be invoked, licensed, gated, audited, patched, and feared. This is the first paradox of the model as sacred object: it is designed to be a tool, yet it accrues the social aura of an idol.
3.5.1 Authority From Non-Knowledge
AI’s authority arrives from performance, not from understanding. Systems produce fluent answers, plausible images, coherent summaries, confident rankings. They function as if they know, and in many settings, that “as if” is enough to reorganize action around them. Here Bataille offers a line that lands like a blade: when thought becomes subordinate to an anticipated result, it ceases to be sovereign; “only unknowing is sovereign” (Bataille, 1991, p. 208). In the AI regime, non-knowledge does not haunt the system as a defect; it can become a resource. Opacity is converted into authority: you cannot fully see inside the model, therefore you cannot easily contest it. The very inability to enter the machine’s interior becomes part of its force. This is not mysticism. It is a social logic: institutions repeatedly treat the model’s outputs as authoritative while relocating accountability outside the model: into policy, evaluation, compliance formats, “responsible AI” rituals, and legal disclaimers. The result is a split between operation and justification: the model acts; the surrounding institutions narrate. Authority becomes spectral.
3.5.2 Altar
An altar is not primarily an object; it is a site of offering. In AI, offerings are continuous: prompts, feedback, private disclosures, workplace documents, codebases, datasets, clicks, corrections, ratings, red-team reports. To use the system is to feed it, sometimes literally as training data, sometimes indirectly as the behavioral traces that guide product decisions and fine-tuning. The altar is the interface. This is where the sacred becomes infrastructural. The user is invited to approach the model as helper, oracle, tutor, confidant. The sacrifice is small each time <seconds of attention, a fragment of memory, an anxiety typed into a box> but cumulative. Over time, these offerings normalize a new asymmetry: the system can absorb endless disclosure without ever exposing itself in return.
3.5.3 Idol
Bataille’s sacred is charged: it attracts and repels, fascinates and terrifies, “it always links closely with pleasure and anguish” (Bataille, 1986, p. 39). AI reproduces this charge almost automatically. The model becomes a screen for projection: hopes of salvation (“it will cure disease”), fears of replacement (“it will make me obsolete”), fantasies of omniscience (“it knows everything”), anxieties of judgment (“it will expose me”), and moral panics of contamination (“it will corrupt truth”). These reactions are not external to the system; they are part of its political economy. Devotion drives adoption; dread drives regulation; scandal drives attention; attention drives investment. The idol-form is intensified by the way the model speaks. It outputs in a voice that is neither fully human nor fully machine: fluent, responsive, uncannily intimate. It can sound like expertise, empathy, certainty, confession. That voice encourages a familiar religious mistake: to confuse articulation with understanding, and confidence with truth. The user is not simply convinced; they are hailed.
3.5.4 Taboo Architecture
The sacred is never free-floating; it is always bordered by prohibitions. Bataille’s claim is not that prohibitions are irrational, but that they are constitutive: without taboo, there is no transgression; without separation, there is no sacred (Bataille, 1986). The AI stack makes this visible in an oddly literal way. Models are surrounded by safety layers: content policies, refusal behaviors, blocked categories, monitoring and enforcement pipelines. These are often presented as purely technical controls. They are also, in structural terms, taboo architecture: a boundary-making practice that both enables the system’s legitimacy and produces the thrill of crossing. Bataille even gives you the line for the epistemic twist: “Without … prohibitions … man would not have achieved … awareness on which science is founded” (Bataille, 1986, p. 286). Guardrails can function as conditions of public trust, ways to make AI usable at scale. But they also create a charged perimeter. The sacred is not abolished by safety; it is, in a sense, generated by it.
3.5.5 The Model as Consecrated Remainder
Put the pieces together and the model appears as a consecrated remainder: produced through sacrificial conversion, protected through taboo and access control, endowed with authority through operational success, and surrounded by rituals of governance that rarely penetrate its interior. It is an altar where offerings flow, and an idol that absorbs projection. This also clarifies why critique so often stalls at the level of outputs. When the model is treated as sacred <set apart, guarded, opaque> then contestation is forced to take indirect routes: audits, regulation, benchmarks, lawsuits, policy battles, public scandal. People argue around the idol because the idol is not easily reachable. The next section turns to the charged edge of this sacred form.
3.6 Transgression and the Interface
The true story is the history of desire.
—Gilles Deleuze & Félix Guattari, Capitalism: A Very Special Delirium
How does one achieve eternal bliss? By saying dada.
—Hugo Ball
Eroticism, for Bataille, is not a theme but a structure <the staged crossing of a limit>: it begins from discontinuity <separate beings fenced off by bodies, names, laws, and shame> and moves toward what he calls continuity. “Continuity is what we are after,” Bataille writes; eroticism “substitute[s] for [our] persistent discontinuity a miraculous continuity between two beings” (Bataille, 1986, pp. 18–19). The point is not “sex” as such, but the intensity produced when prohibition and violation touch. Generative AI intensifies this structure not because it is inherently erotic, but because it is a boundary-softening machine: it blurs tool and companion, public and private, work and intimacy, confession and infrastructure—under the polite mask of utility.
3.6.1 Continuity-Seeking at the Interface
The AI interface offers a new continuity-format: a surface that replies <coherently, patiently, often tenderly> inviting projection and disclosure. People bring fantasies and shame and receive back something that resembles recognition. The effect can feel like a breach in ordinary social distance: a channel in which refusal is rare, judgment is muted, and attention is on tap. But the continuity it offers is asymmetric. The user exposes; the system cannot be wounded in return. Traditional erotic transgression concentrates risk in mutual bodies and reputations. Synthetic intimacy concentrates risk in the human term: the disclosure can be logged, aggregated, or metabolized into feedback and training. The interface feels private; the infrastructure is not.
3.6.2 Jailbreak Thrills
Jailbreaking is not just technique; it is transgressive play catalyzed by the existence of the rule. The user attempts to make the model say what it is not supposed to say, produce what it is not supposed to produce, reveal what it is not supposed to reveal—through role-play, oblique phrasing, adversarial prompt puzzles, and context laundering. Bataille’s affective description fits this pattern with disturbing accuracy: “In the act of violating [the taboo] we feel the anguish,” and the inner experience of eroticism “always links closely with pleasure and anguish” (Bataille, 1986, pp. 38–39). The pleasure is the slip past the priesthood of safety; the anguish is the sensed proximity of sanction, contamination, or harm. What looks like “breaking rules” is also, paradoxically, rule-confirmation. Transgression does not abolish the law; it performs it: what Bataille calls, with cold precision, “a lawful crime” (Bataille, 1991, p. 124). The prohibition is the engine of intensity; the violation is the proof that the prohibition exists. The costs, however, do not distribute evenly. The model is not endangered by its violations. Harm falls outward—onto targets of generated abuse, onto workers who must moderate what users force into existence, onto publics saturated by synthetic plausibility. The transgressive thrill is individualized; the fallout is socialized.
3.6.3 Necrosyntax and the Undead Voice
Bataille binds eroticism to death because death is the ultimate breach of discontinuity: the point where the bounded self collapses. “Eroticism opens the way to death,” he writes, and death “opens the way to the denial of our individual lives” (Bataille, 1986, p. 24). AI introduces a quieter, more administrative intimacy with that limit: it makes the dead speak as a routine service. Models emulate the styles of authors who are gone; voices are synthesized from recordings that cannot consent; old messages are reassembled into fresh replies. What appears banal <“write this in the style of X,” “clone this voice”> is, structurally, a kind of everyday necromancy. Speech detaches from the speaker with new force; style becomes a resource that can be extracted, recombined, and redeployed. The model becomes a crypt of forms. Necrosyntax names the resulting condition: an undead circulation of voices. The “voice” that returns is neither truly the dead nor simply the living; it is a synthetic remainder animated by statistical resemblance, an echo that can be consumed without reciprocal demand. In Bataille’s image, it has the strange serenity of dissolution: “Joy of the dying man, wave among waves” (Bataille, 1988b, p. 51). The undead voice is compelling partly because it is safe to consume: it asks nothing, refuses nothing, demands no care.
3.6.4 What Transgression Reveals
The point here is not to sensationalize “AI erotics.” It is to treat the interface as a threshold-machine in Bataille’s sense: a place where prohibitions generate intensity, where boundary-crossing becomes a cultural technique, and where continuity-seeking is captured by infrastructure. Eroticism, he insists, “always entails a breaking down of established patterns… of the regulated social order” (Bataille, 1986, p. 18). In the AI regime, that breaking-down is routinized: packaged as product, governed by policy, and harvested as data. Transgression reveals what the regime tries to conceal. The sacredness of the model is not just reverence; it is taboo architecture. The intimacy promised by the interface is not mutuality; it is an asymmetrical continuity in which the human becomes legible, spendable, and reproducible. And because this entire choreography runs through an acephalic stack, violation does not resolve into a stable moral order; it becomes part of the machine’s metabolism—limits that create desire, desire that forces new limits.
3.7 Waste, Play, Hallucination
All art is quite useless.
—Oscar Wilde, The Picture of Dorian Gray
The things you own end up owning you.
—Fight Club (1999)
I have nothing to say and I am saying it…
—John Cage
If sacrifice and transgression name two ways the AI stack metabolizes excess, then waste names the most ordinary <and in some ways the most revealing> way surplus appears in everyday practice. Here the register shifts from altar and taboo to the profane: not solemn offering, but overflow; not interdiction, but spillage; not the consecrated object, but the endless minor outputs that accumulate around it. Bataille gives the premise: at the scale of general economy, “there is generally no growth but only a luxurious squandering of energy in every form!” (Bataille, 1988a, p. 33). In the AI regime, that squandering often takes a semiotic form: coherence produced beyond warrant, beyond need, beyond attention’s capacity to receive it. Bataille’s notion of sovereignty helps here precisely because it is easy to misread. Sovereignty is the moment of life no longer subordinated to usefulness, calculation, and ends—the capacity to spend without justification. “The meaning of this profound freedom,” Bataille writes, “is to consume profitlessly whatever might remain …” (Bataille, 1988a, p. 58). AI is officially narrated as the anti-sovereign technology par excellence. And yet some of its most visible cultural effects <hallucination, playful misuse, torrents of low-value content> stage a general economy of signification: meaning generated beyond what can be socially metabolized (Styhre, 2002; Stapleton, 2022).
3.7.1 Hallucination
“Hallucination” names the tendency of generative systems to produce fluent falsity: invented citations, fabricated details, confident narratives detached from accountable sources. In technical discourse, hallucination is treated as a defect to be minimized: through retrieval, grounding, better training, better evaluation. But structurally it reveals something about how generative systems produce sense. Confronted with a prompt, the model does not consult a small set of determinate facts; it navigates a space of innumerable plausible continuations. Even under constraint, output remains overdetermined: many continuations satisfy learned statistical regularities. The result is an excess of coherence relative to ground truth, an overflow of “sense” relative to what can be warranted. The fantasy of eliminating hallucination entirely presupposes an ideal of purely instrumental language in which meaning never exceeds correspondence, a “restricted semiotics” that would purge surplus rather than manage it. Bataille’s wager is harsher: surplus doesn’t disappear; it migrates.
3.7.2 Play and Present-Time Luxury
A striking portion of everyday engagement with generative systems is not serious productivity but play: meme generation, surreal prompts, satirical role-play, fanfiction, absurd dialogues, experiments in style—in short, shitposting with a model in the loop. From a restricted-economic view, this looks like waste: a failure to extract productivity from an expensive apparatus. From a general-economic view, it is predictable. When symbolic production becomes cheap—approaching negligible marginal cost—luxury use becomes default. This is where the profane liturgy shows itself: users detach the apparatus from strict utility and assert a small sovereignty of the present. “If I am no longer concerned about ‘what will be’ but about ‘what is,’ what reason do I have to keep anything in reserve?” (Bataille, 1988a, p. 58). In prompt-play, that logic appears as minor extravagance: speech for its own sake, output for the thrill of output, attention spent without a redeeming productivity alibi. Kaplan’s “digital potlatch” already captures how seemingly useless digital practices can function as outlets for surplus communicative energy (Kaplan, 2019). Generative AI intensifies the dynamic by manufacturing signs on demand. But the sovereignty here is compromised. Play does not escape the machine; it is one of its growth surfaces. Every playful prompt becomes another trace, another metric, another contribution to engagement and <sometimes> training pipelines. Luxury and capture coincide. Uselessness becomes a resource.
3.7.3 Overproduction
Beyond hallucination and play lies a more brute phenomenon: overproduction. As the cost of generating text, images, sound, and video collapses, the world fills with low-value SEO sludge, comment spam, synthetic reviews, filler posts, boilerplate reports—content that competes for attention but seldom receives it. Much of it is never truly read. It accumulates in feeds, inboxes, and archives; it becomes background radiation. In restricted terms, this is pathological: misallocation, a tragedy of the attention commons. In general-economic terms, it is structurally coherent. Excess must appear somewhere. If computational capacity expands faster than attention and verification can metabolize meaning, surplus will translate into form regardless: more outputs than there is time to receive, more continuations than there is care to interpret. Monumentality shifts from stone to stream: not one pyramid of excess, but a continuous torrent of minor signs.
3.7.4 The Profane Liturgy as Diagnosis
Here waste is not an embarrassment at the edges of AI. It is a diagnostic window into the regime. Hallucination shows surplus meaning <coherence exceeding warrant>. Play shows luxury use <expenditure without productive end, even as capture persists>. Overproduction shows the landfill <signs generated faster than attention and verification can metabolize>. Bataille’s line is almost too clean for this: “For the subject is consumption insofar as it is not tied down to work” (Bataille, 1988a, p. 58). The twist in the AI regime is that consumption and work increasingly blur: leisure becomes training signal, joking becomes metric, idle generation becomes infrastructure load, and “waste” becomes an input to growth.
3.8 Politics of Expenditure: Governance After the Head
The end of the state is freedom.
—Baruch Spinoza, Theological-Political Treatise
Politics is the question of where surplus is routed, and who is allowed to call that routing “necessary.” The AI stack organizes expenditure across energy, labor, attention, meaning. Once sovereignty becomes infrastructural and acephalic, governance cannot be reduced to “fixing bad answers.” It has to engage the deeper problem: how an economy of surplus spends the world, quietly, by default, at scale.
3.8.1 The Separation Technique
The contemporary stack is built to keep sacrifice unspeakable. That is its elegance. The mine does not appear in the chat window. The labeling factory does not appear in the demo. The moderator’s nervous system does not appear in the policy language. The water pulled from a watershed does not appear in the metaphor of cloud. Loss persists, but it is made administratively quiet <outsourced, naturalized, far away>. This is not an accident. It is a technique of intensification. A system can expand indefinitely if the sites where it spends the world are structurally kept outside the frame of the world it shows to users. Bataille’s sacred is not purity; it is separation. Here, separation becomes infrastructure. So the first political move is not moral condemnation. It is exposure: making the sacrificial structure legible enough to be disputed.
3.8.2 Beyond the Theater of Explainability
Explainability is often treated as the royal road to responsibility. If we can see why the model did something, we can govern it. But explainability is also a theater: it gives an image of control while leaving the primary flows untouched. A politics of expenditure asks different questions.
Not only: is the model aligned? But: what did alignment cost, and who paid?
Not only: is the output safe? But: what labor pipeline makes safety possible?
Not only: can the system explain itself? But: what is the system allowed to consume?
It is entirely possible to build a model that refuses certain outputs while the pipeline stabilizing that refusal depends on precarious human labor and planetary burn. It is entirely possible to proclaim responsibility while treating the conditions of responsibility as someone else’s problem. Moral language becomes laundering when it floats above the burn. Energy, water, supply chains, annotation, moderation, provenance—these are not externalities orbiting the “real” AI; they are the real AI. If the model is the consecrated remainder, then these are the rites that produce it.
3.8.3 Responsibility Without a Head
Acephaly makes the familiar blame-drama collapse. We look for a head <an executive, a lab, a user, a model> and the structure keeps slipping away. Harm is real; authorship dissolves. Everyone can point elsewhere. This diffusion is not a philosophical curiosity; it is how contemporary power survives contact with critique. It distributes command while dissolving accountability. Control becomes atmospheric. So responsibility has to be reconceived as leverage, not intention. Who sets the defaults. Who sets the thresholds. Who controls the gates. Who chooses where deployment is permitted and where it is prohibited. Who decides logging. Who decides evaluation. Who decides scale. In a headless apparatus, constraints are what governance looks like.
3.8.4 The Hinge: Glorious and Catastrophic Expenditure
Bataille’s distinction matters here because it refuses the fantasy of zero-waste modernity. The accursed share must be spent. The political problem is to classify and contest the routes. “Our ignorance… deprives us of the choice… [and] consigns men and their works to catastrophic destructions” (Bataille, 1988a, pp. 23–24). In the AI regime, “catastrophic” need not mean cinematic apocalypse. It can also mean slow, distributed ruin: ecological burn normalized as progress; labor harm treated as a supply-chain detail; public sensemaking drowned in a landfill of signs; coercive persuasion markets framed as “personalization”; militarized automation justified as competitive necessity. Some routes are structurally annihilating. Others, still costly, are less destructive: public inquiry, shared verification infrastructures, art and experiment, tools that thicken coordination rather than harvest it. This is not a call to “innovate responsibly.” It is a classification problem about futures: where do the losses land, and what kinds of social worlds do they compose?
3.8.5 Counter-Rituals
Bataillean politics cannot be purely managerial. Moralism represses what it cannot metabolize. Governance that pretends it can eliminate excess will simply displace it into darker zones: more outsourced, more violent, less speakable. What becomes necessary instead are counter-rituals: practices that interrupt sacrificial circuits rather than merely polishing outputs. Not purity. Not exit fantasies. Limits. Some burns must be made unacceptable. Some disclosures must stop being normalized. Some uses must become taboo—not because taboo abolishes desire, but because taboo is one of the few cultural tools that prevents catastrophe from becoming the default outlet for surplus. This is also where sovereignty returns with a sharper meaning. Bataille’s sovereign is not the commander of others but the figure who interrupts servile deferral—who refuses to sacrifice the present entirely to the future. “The sovereign restores to the primacy of the present the surplus share of production…” (Bataille, 1991, p. 241). Read politically, that is not a lifestyle slogan; it is a governance principle: surplus must be brought back into contestable present arrangements <accounted for, bounded, redirected> rather than being allowed to disappear into pipelines that spend it “naturally.”
Refusal, then, is not merely personal ethics. It is one of the few ways an acephalic system can be governed at all: by denying it some of the surplus it expects to convert so expenditure can no longer hide behind inevitability. The right to non-participation becomes political rather than personal. Synthetic intimacy is not only companionship; it is also a structure that invites disclosure under opacity. The interface is soft; the pipeline is not. Sovereignty returns here in Bataille’s sense: not domination, but the capacity to withhold.
In practical terms, counter-rituals are institutional levers that decide how surplus is routed. One lever is public accounting of the burn: mandatory, comparable reporting on energy, water, and hardware supply chains at the level of training runs and deployment services—not as corporate PR, but as audited disclosure. Another is compute budgeting: not a romantic ban on models, but a political decision about where high-intensity computation is permissible and where it is not. A third is labor-visible safety: standards that force moderation and labeling labor into the governance frame—minimum protections, mental health support, transparency about exposure, collective bargaining rights, enforceable limits on “infinite” human review. A fourth is data non-participation as a right: robust opt-out defaults, provenance tracking, and enforceable restrictions on training uses, especially for intimate, medical, or biometric traces. None of this restores a missing head. That is the point. Governance after the head is governance by rerouting: making expenditure visible, contestable, and bounded, so that surplus does not “solve” itself through catastrophe.
Conclusion
Simplifying to the extreme… incredulity toward metanarratives.
—Jean-François Lyotard
Day by day make it new.
—Ezra Pound
To challenge every status quo…
—Genesis P-Orridge
Reading contemporary AI through Bataille doesn’t “solve” AI by moralizing it, or by promising a clearer dashboard of harms. It changes the scale of the question. What appears, at the interface, as intelligence and convenience is also a choreography of expenditure: energy converted into heat, archives converted into weights, attention converted into signal, and uncertainty converted into fluent form. The point is not that AI is uniquely wasteful; it is that it organizes waste with unusual discipline while narrating itself as pure utility.
The hinge, for this essay, is simple and non-negotiable: surplus does not vanish. It must be spent. The danger is not expenditure as such, but the blindness that lets expenditure default into catastrophe—slow ecological attrition normalized as progress, labor harm rendered invisible, institutions flooded with plausibility, publics trained to confuse fluency with warrant. The political task is therefore not to fantasize about a frictionless intelligence, but to contest the routing of what is already being spent, so losses do not remain automatic, outsourced, and politically mute.
To call AI bat{AI}lle is not a pun but a diagnosis: models as sacrificial condensations of energetic and cultural surplus; stacks as acephalic sovereigns; hallucination as sovereign expenditure inside a regime that officially worships utility. There is no head to sever, no stable subject to reinstate. What remains possible is a practice of lucid counter-ritual: learning to notice where sacrifice already happens, where transgression already organizes our relation to machines, and deciding together which losses we accept—and which we refuse.
In that contested practice, something like a sovereignty after the head may still emerge: not the sovereignty of a king or an artificial mind, but the sovereignty of a species learning <late, awkwardly, but not necessarily too late> to govern the routing of its own excess.
- O’Brien, M., & Fingerhut, H. (2023, September 12). Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water. Iowa Public Radio.
- Merken, S. (2023, June 26). New York lawyers sanctioned for using fake ChatGPT cases in legal brief. Reuters. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
- Simonite, T. (2021, April 26). This researcher says AI is neither artificial nor intelligent. WIRED. https://www.wired.com/story/researcher-says-ai-not-artificial-intelligent/
- Calma, J. (2025, August 21). Google says a typical AI text prompt only uses 5 drops of water — experts say that’s misleading. The Verge. https://www.theverge.com/report/763080/google-ai-gemini-water-energy-emissions-study
- Bentzen, N. (2025, December). Information manipulation in the age of generative artificial intelligence (Briefing No. PE 779.259). European Parliamentary Research Service (EPRS), European Parliament. https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/779259/EPRS_BRI(2025)779259_EN.pdf
Bibliography
Andjelkovic, F. (2022). Prosthetic Gods, Projected Monsters: Technology, Insanity, and Imagining the Human Subject in H.P. Lovecraft and Georges Bataille. The Journal of Gods and Monsters, 3(1), 17–34.
Bataille, G. (1985). Visions of Excess: Selected Writings, 1927–1939 (A. Stoekl, Trans.). Minneapolis, MN: University of Minnesota Press.
Bataille, G. (1986). Erotism: Death and Sensuality (M. Dalwood, Trans.). San Francisco, CA: City Lights.
Bataille, G. (1988a). The Accursed Share: An Essay on General Economy, Vol. 1: Consumption (R. Hurley, Trans.). New York, NY: Zone Books.
Bataille, G. (1988b). Inner Experience (L. Boldt, Trans.). Albany, NY: State University of New York Press.
Bataille, G. (1989). Theory of Religion (R. Hurley, Trans.). New York, NY: Zone Books.
Bataille, G. (1990). Hegel, death and sacrifice. Yale French Studies, 78, 9–28.
Bataille, G. (1991). The Accursed Share: An Essay on General Economy (Vol. 2, The history of eroticism; Vol. 3, Sovereignty) (R. Hurley, Trans.). New York, NY: Zone Books.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.
Birhane, A. (2020). Algorithmic colonization of Africa. SCRIPTed, 17(2), 389–409. https://doi.org/10.2966/scrip.170220.389
Botting, F., & Wilson, S. (Eds.). (1997). The Bataille Reader. Oxford, UK: Blackwell.
Bratton, B. H. (2016). The Stack: On Software and Sovereignty. Cambridge, MA: MIT Press.
Brassier, R. (2007). Nihil unbound: Enlightenment and extinction. Palgrave Macmillan.
Brusseau, J. (2023). The AI human condition is a dilemma between privacy and selfhood. PhilArchive preprint.
Chun, W. H. K. (2006). Control and freedom: Power and paranoia in the age of fiber optics. The MIT Press.
Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.
Dauvergne, P. (2022). Is artificial intelligence greening global supply chains? Exposing the political economy of environmental costs. Review of International Political Economy, 29(3), 696–718. https://doi.org/10.1080/09692290.2020.1814381
Easterling, K. (2014). Extrastatecraft: The power of infrastructure space. Verso.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.
Hegarty, P. (2000). Georges Bataille: Core Cultural Theorist. London, UK: SAGE.
Hickman, S. C. (2019). The horror of capitalism: Consuming the body of God. https://socialecologies.wordpress.com/2019/04/29/the-horror-of-capitalism-consuming-the-body-of-god/
Hui, Y. (2016). The Question Concerning Technology in China: An Essay in Cosmotechnics. Falmouth, UK: Urbanomic / Minneapolis, MN: University of Minnesota Press.
Ireland, A. (2017). The Alien Inside. Art + Australia: Extraterritoriality, 53.2(One), 42–47.
Irwin, A. C. (1993). Ecstasy, Sacrifice, Communication: Bataille on Religion and Inner Experience. Soundings: An Interdisciplinary Journal, 76(1), 105–128. http://www.jstor.org/stable/41178621
Kaplan, M. (2019). The digital potlatch: The uses of uselessness in the digital economy. New Media & Society, 21(9), 1947–1966.
Kay, J., Kasirzadeh, A., & Mohamed, S. (2024). Epistemic injustice in generative AI (arXiv:2408.11441v1). arXiv. https://doi.org/10.48550/arXiv.2408.11441
Land, N. (1992). The Thirst for Annihilation: Georges Bataille and Virulent Nihilism. London, UK: Routledge.
Lemmens, P. (2020). Other turnings: Yuk Hui’s pluralist cosmotechnics in between Heidegger’s ontological and Stiegler’s organological understanding of technology. Angelaki, 25(4).
Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models (arXiv:2304.03271). arXiv. https://doi.org/10.48550/arXiv.2304.03271
Meillassoux, Q. (2008). After finitude: An essay on the necessity of contingency (R. Brassier, Trans.). Continuum.
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33, 659–684. https://doi.org/10.1007/s13347-020-00405-
Muldoon, J., & Wu, B. A. (2023). Artificial intelligence in the colonial matrix of power. Philosophy & Technology, 36(4), Article 80. https://doi.org/10.1007/s13347-023-00687-8
Negarestani, R. (2018). Intelligence and Spirit. Falmouth, UK: Urbanomic.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Noys, B. (2014). Malign velocities: Accelerationism and capitalism. Zer0 Books.
Noys, B. (2000). Georges Bataille: A critical introduction. Pluto Press.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Parisi, L. (2019). The alien subject of AI. In D. Chandler & C. Fuchs (Eds.), Digital Objects, Digital Subjects: Interdisciplinary Perspectives on Capitalism, Labour and Politics in the Age of Big Data (pp. 231–247). London, UK: University of Westminster Press.
Pasquinelli, M. (2023). The eye of the master: A social history of artificial intelligence. Verso.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Pawlett, W. (2018). The sacred, heterology and transparency: Between Bataille and Baudrillard. Theory, Culture & Society, 35(4–5), 175–191. https://doi.org/10.1177/0263276418769729
Peredrii, B. (2025). The acceleration of transgression in Nick Land’s philosophy. Humanitarian Vision, 11(1), 22–29. https://doi.org/10.23939/shv2025.01.022
Ramos Mejía, T. (2025). The imperative of co-existence: On technique in Georges Bataille’s social theory. Technophany, A Journal for Philosophy and Technology, 3(1), 1–21. https://doi.org/10.54195/technophany.19219
Rehn, A. (2023). Bataille, the poverty of innovation theory and the rise of creation studies. In F. De March & J. Dumond (Eds.), Un regard critique sur la gestion avec l’œil de Georges Bataille (pp. 230–242). EMS Éditions. https://doi.org/10.3917/ems.demar.2023.01.0230
Righi, C. (2020). New technologies’ promise to the self and the becoming of the sacred: Insights from Georges Bataille’s concept of transgression. In J. Hensold, J. Kynes, P. Öhlmann, V. Rau, R. Schinagl, & A. Taleb (Eds.), Religion in motion. Springer, Cham. https://doi.org/10.1007/978-3-030-41388-0_6
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.
Rouvroy, A., & Berns, T. (2013). Algorithmic governmentality and prospects of emancipation: Disparateness as a precondition for individuation through relationships? Réseaux, 2013/1(177), 163–196. https://doi.org/10.3917/res.177.0163
Shilina, S. (2025, April 18). Acephaly: Toward a philosophy of distributed power, knowledge, and care. SSRN. http://dx.doi.org/10.2139/ssrn.5253208
Sørensen, A. (2012). On a universal scale: Economy in Georges Bataille’s general economy. Philosophy & Social Criticism, 38(2), 169–197.
Srnicek, N., & Williams, A. (2013). #ACCELERATE: Manifesto for an accelerationist politics. Critical Legal Thinking.
Stapleton, J. (2022). The Intoxication of Destruction in Theory, Culture and Media: A Philosophy of Expenditure after Georges Bataille. Amsterdam, NL: Amsterdam University Press.
Stoekl, A. (2007). Bataille’s Peak: Energy, religion, and postsustainability. University of Minnesota Press. http://www.jstor.org/stable/10.5749/j.ctttv59s
Styhre, A. (2002). Information and communication technology and the excess(es) of information: An introduction to Georges Bataille’s general economy. Ephemera: Critical Dialogues on Organization, 2(2), 105–121.
Szepanski, A. (2024). The Ecstatic of the Excess in Bataille, Baudrillard, and Marx. In: Capitalism in the Age of Catastrophe. Palgrave Insights into Apocalypse Economics. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-57754-3_1
Terranova, T. (2004). Network culture: Politics for the information age. Pluto Press.
Tomasi, A. (2007). Technology and intimacy in the philosophy of Georges Bataille. Human Studies, 30, 411–428. https://doi.org/10.1007/s10746-007-9072-7
Weinstein, M. A. (2001). Virtual Bataille. Parallax, 7(1), 76–80. https://doi.org/10.1080/13534640010015944
Wijarnarko, A., & Maharani, S. (2024). Human’s relationship with technology in Nick Land’s accelerationism. Jurnal Filsafat, 34(1). https://doi.org/10.22146/jf.86596
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York, NY: PublicAffairs.
Zwier, J., & Blok, V. (2020). Energetic ethics: Georges Bataille in the Anthropocene. In L. Valera & J. Castilla (Eds.), Global changes (Ethics of Science and Technology Assessment, Vol. 46). Springer, Cham. https://doi.org/10.1007/978-3-030-29443-4_15